Countering Code Injection Attacks With Instruction Set Randomization
We describe a new, general approach for safeguarding systems against any type of code-injection attack. We apply Kerckhoffs' principle by creating process-specific randomized instruction sets (e.g., machine instructions) of the system executing potentially vulnerable software. An attacker who does not know the key to the randomization algorithm will inject code that is invalid for that randomized processor, causing a runtime exception. To determine the difficulty of integrating support for the proposed mechanism in the operating system, we modified the Linux kernel, the GNU binutils tools, and the bochs-x86 emulator. Although the performance penalty is significant, our prototype demonstrates the feasibility of the approach, and should be directly usable on a suitably modified processor (e.g., the Transmeta Crusoe). Our approach is equally applicable against code-injection attacks in scripting and interpreted languages, e.g., web-based SQL injection. We demonstrate this by modifying the Perl interpreter to permit randomized script execution. The performance penalty in this case is minimal. Where our proposed approach is feasible (i.e., in an emulated environment, in the presence of programmable or specialized hardware, or in interpreted languages), it can serve as a low-overhead protection mechanism and can easily complement other mechanisms.
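The core mechanism can be sketched in a few lines: randomize each process's code image with a secret key at load time and de-randomize it just before execution, so that injected code, which was never randomized, decodes to invalid instructions. The XOR scheme, key size, and byte strings below are a simplified illustration, not the prototype's exact transformation.

# A minimal sketch of XOR-based instruction-set randomization with a
# hypothetical per-process key; not the paper's actual implementation.
import os

def randomize(code: bytes, key: bytes) -> bytes:
    """Encode a code image with a process-specific key at load time."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(code))

def derandomize(code: bytes, key: bytes) -> bytes:
    """Decode instructions just before execution (XOR is its own inverse)."""
    return randomize(code, key)

key = os.urandom(16)                      # per-process randomization key
program = bytes.fromhex("90904889f8c3")   # legitimate code, randomized at load
loaded = randomize(program, key)

# The processor/emulator derandomizes legitimate code back to valid instructions.
assert derandomize(loaded, key) == program

# Injected code was never randomized with the key, so derandomizing it
# yields garbage bytes that trigger an illegal-instruction exception.
injected = bytes.fromhex("31c0cd80")
garbled = derandomize(injected, key)      # almost certainly invalid opcodes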
Survivor: An Approach for Adding Dependability to Legacy Workflow Systems
Although they often provide critical services, most workflow systems are not dependable. There is much literature on dependable and survivable distributed systems, but most of it is concerned with developing new architectures rather than adapting pre-existing ones, and it focuses on hardening and security-based defense rather than recovery. For deployed systems, it is often infeasible to completely replace existing infrastructures; a more pragmatic goal is to adapt existing distributed systems so that they offer better dependability. In this paper, we outline a general architecture that can easily be retrofitted to legacy workflow systems to improve dependability and fault tolerance. We do this by monitoring workflow enactment and replicating partial workflow states, which serve as tools for detection, analysis, and recovery. We discuss policies that can guide these mechanisms. Finally, we describe and evaluate our implementation, Survivor, which modifies an existing workflow system provided by the Naval Research Lab.
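As a rough illustration of the retrofit idea, the sketch below wraps an existing engine's task execution, replicates partial workflow state before each step, and replays from the last replicated state on failure. The class and function names are illustrative placeholders, not Survivor's actual interfaces.

# A minimal sketch: monitor enactment, replicate partial workflow state,
# and recover a failed step from the replicated state.
import copy

class WorkflowMonitor:
    def __init__(self, replicas):
        self.replicas = replicas          # stand-ins for remote state stores

    def checkpoint(self, workflow_id, step, state):
        for replica in self.replicas:     # replicate partial workflow state
            replica[(workflow_id, step)] = copy.deepcopy(state)

    def recover(self, workflow_id, step):
        return copy.deepcopy(self.replicas[0][(workflow_id, step)])

def run_workflow(tasks, monitor, workflow_id="wf-1"):
    state = {}
    for step, task in enumerate(tasks):
        monitor.checkpoint(workflow_id, step, state)    # replicate before enacting
        try:
            state = task(state)                         # enact the next task
        except Exception:
            state = monitor.recover(workflow_id, step)  # restore replicated state
            state = task(state)                         # retry the failed step
    return state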
CASPER: Compiler-Assisted Securing of Programs at Runtime
Ensuring the security and integrity of computer systems deployed on the Internet is growing harder. This is especially true for server systems based on open source projects such as Linux, Apache, and Sendmail, since it is easier for an attacker to obtain the binary form of deployed applications when the source code is publicly accessible. Often, having a binary copy of an application program is enough to locate security vulnerabilities in it. For legacy systems where the source code is not available, advanced reverse-engineering and decompilation techniques can be used to construct attacks. This paper focuses on measures that reduce the effectiveness of attackers at conducting large-scale, distributed attacks. The first line of defense involves additional runtime checks that counteract the majority of hacking attacks. Introducing diversity in deployed systems severely diminishes the probability of propagation to other systems and helps prevent attacks such as the DDoS attack against the DNS root servers on October 21, 2002.
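The sketch below illustrates the two defenses named above: a compiler-style runtime check guarding a fixed-size buffer copy, and a per-deployment diversity parameter so that one crafted exploit does not transfer between installations. It is an illustration of the concepts, not CASPER's implementation.

# A minimal sketch of an inserted runtime check plus deployment diversity.
import secrets

BUFFER_SIZE = 64
LAYOUT_OFFSET = secrets.randbelow(4096)    # diversity: differs per deployment
                                           # (shown only to illustrate variation)

def checked_copy(dst: bytearray, src: bytes) -> None:
    """The kind of bounds check a compiler could insert before an unchecked copy."""
    if len(src) > len(dst):
        raise RuntimeError("bounds check failed: possible overflow attempt")
    dst[: len(src)] = src

buf = bytearray(BUFFER_SIZE)
checked_copy(buf, b"ordinary input")        # passes the inserted check
# checked_copy(buf, b"A" * 1000)            # would be stopped at runtime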
Preventing Attacks on Machine Readable Travel Documents (MRTDs)
After the terror attacks of 9/11, the U.S. Congress passed legislation that requires countries in the US Visa Waiver Program to begin issuing machine readable passports that are tamper resistant and incorporate biometric and document authentication identifiers. The International Civil Aviation Organization (ICAO) has issued specifications for Machine Readable Travel Documents (MRTDs) that are equipped with a smart card processor to perform biometric identification of the holder. Some countries, such as the United States, will issue machine readable passports that serve only as passports. Other countries, such as the United Kingdom, intend to issue more sophisticated, multi-application passports that can also serve as national identity cards. We have conducted a detailed security analysis of these specifications, and we illustrate possible scenarios that could compromise the security and privacy of holders of such travel documents. Finally, we suggest improved cryptographic protocols and high-assurance smart card operating systems to prevent these compromises and to support electronic visas as well as passports.
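As an illustration of the kind of access-control protocol involved, the sketch below derives a reader key from data printed in the machine readable zone (MRZ) and runs a challenge-response exchange before the chip releases holder data. The key derivation, hash choices, and message formats are simplified placeholders, not the ICAO-specified ones.

# A simplified sketch of MRZ-derived access control: only a reader that has
# optically read the MRZ can derive the key and answer the chip's challenge,
# which limits unauthorized remote reads.
import hashlib, hmac, os

def derive_access_key(mrz_info: str) -> bytes:
    """Derive a shared key from MRZ fields (document no., birth date, expiry)."""
    return hashlib.sha256(mrz_info.encode()).digest()

def chip_challenge() -> bytes:
    return os.urandom(8)                        # nonce from the smart card chip

def reader_response(key: bytes, challenge: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def chip_verify(key: bytes, challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = derive_access_key("P<UTOERIKSSON<<ANNA<MARIA<<<1408123456789")  # sample MRZ
nonce = chip_challenge()
assert chip_verify(key, nonce, reader_response(key, nonce))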
An Approach to Autonomizing Legacy Systems
Adding adaptation capabilities to existing distributed systems is a major concern. The question addressed here is how to retrofit existing systems with self-healing, adaptation, and/or self-management capabilities. The problem is obviously intensified for 'systems of systems' composed of components, whether new or legacy, that may have been developed by different vendors, mixing and matching COTS and 'open source' components. This system composition model is expected to be increasingly common in high performance computing. The usual approach is to train technicians to understand the complexities of these components and their connections, including performance tuning parameters, so that they can manually monitor and reconfigure the system as needed. We envision instead attaching a 'standard' feedback loop infrastructure to existing distributed systems for the purposes of continual monitoring and dynamically adapting their activities and performance. (This approach can also be applied to 'new' systems, as an alternative to 'building in' adaptation facilities, but we do not address that here.) Our proposed infrastructure consists of multiple layers with the objectives of probing, measuring, and reporting activity and state within the execution of the legacy system among its components and connectors; gauging, analyzing, and interpreting the reported events; and possibly feeding back to focus the probes and gauges to drill deeper or, when necessary, to directly but automatically reconfigure the running system.
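The sketch below illustrates one pass of such a feedback loop: a probe reports events from the running legacy system, a gauge interprets them, and a controller decides between drilling deeper and reconfiguring. The metric names, thresholds, and classes are illustrative assumptions, not the paper's infrastructure.

# A minimal sketch of the probe / gauge / reconfigure layers.
class Probe:
    def report(self, system):
        return {"queue_depth": system["queue_depth"],
                "error_rate": system["error_rate"]}

class Gauge:
    def interpret(self, event):
        if event["error_rate"] > 0.05:
            return "reconfigure"
        if event["queue_depth"] > 100:
            return "drill_deeper"
        return "ok"

def feedback_loop(system, probe, gauge, reconfigure):
    event = probe.report(system)              # probing and reporting layer
    decision = gauge.interpret(event)         # gauging and analysis layer
    if decision == "reconfigure":
        reconfigure(system)                   # automatic reconfiguration
    return decision

legacy = {"queue_depth": 20, "error_rate": 0.12}
print(feedback_loop(legacy, Probe(), Gauge(),
                    lambda s: s.update(error_rate=0.0)))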
On The General Applicability of Instruction-Set Randomization
We describe Instruction-Set Randomization (ISR), a general approach for safeguarding systems against any type of code-injection attack. We apply Kerckhoffs' principle to create OS process-specific randomized instruction sets (e.g., machine instructions) of the system executing potentially vulnerable software. An attacker who does not know the key to the randomization algorithm will inject code that is invalid for that (randomized) environment, causing a runtime exception. Our approach is applicable to machine-language programs and to scripting and interpreted languages. We discuss three approaches (protection for Intel x86 executables, Perl scripts, and SQL queries), one from each of the above categories. Our goal is to demonstrate the generality and applicability of ISR as a protection mechanism. Our emulator-based prototype demonstrates the feasibility of ISR for x86 executables and should be directly usable on a suitably modified processor. We demonstrate how to mitigate the significant performance impact of emulation-based ISR by using several heuristics to limit the scope of randomized (and interpreted) execution to sections of code that may be more susceptible to exploitation. The SQL prototype consists of an SQL query-randomizing proxy that protects against SQL injection attacks with no changes to database servers, minor changes to CGI scripts, and negligible performance overhead. Similarly, the performance penalty of a randomized Perl interpreter is minimal. Where the performance impact of our proposed approach is acceptable (i.e., in an already-emulated environment, in the presence of programmable or specialized hardware, or in interpreted languages), it can serve as a broad protection mechanism and complement other security mechanisms.
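To make the SQL case concrete, the sketch below randomizes the keywords in trusted query templates and has a proxy reject any query containing an unrandomized keyword before forwarding it to the database. The suffix scheme and keyword list are simplified assumptions rather than the prototype's exact design.

# A minimal sketch of SQL keyword randomization behind a derandomizing proxy.
import re, secrets

KEY = secrets.token_hex(4)                    # per-deployment random suffix
KEYWORDS = ["SELECT", "FROM", "WHERE", "OR", "AND", "UNION"]

def randomize_template(template: str) -> str:
    """Used by trusted code: append the key to every SQL keyword."""
    for kw in KEYWORDS:
        template = re.sub(rf"\b{kw}\b", kw + KEY, template)
    return template

def proxy_derandomize(query: str) -> str:
    """Proxy: bare keywords can only come from injected input, so reject them."""
    for kw in KEYWORDS:
        if re.search(rf"\b{kw}\b", query):
            raise ValueError("SQL injection detected: unrandomized keyword")
    return query.replace(KEY, "")             # strip the suffix before the DB

template = randomize_template("SELECT name FROM users WHERE id = {uid}")
safe = proxy_derandomize(template.format(uid="42"))            # passes
# proxy_derandomize(template.format(uid="42 OR 1=1"))          # rejected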
An Integrative Multi-Network and Multi-Classifier Approach to Predict Genetic Interactions
Genetic interactions occur when a combination of mutations results in a surprising phenotype. These interactions capture functional redundancy, and thus are important for predicting function, dissecting protein complexes into functional pathways, and exploring the mechanistic underpinnings of common human diseases. Synthetic sickness and lethality are the most studied types of genetic interactions in yeast. However, even in yeast, only a small proportion of gene pairs have been tested for genetic interactions due to the large number of possible combinations of gene pairs. To expand the set of known synthetic lethal (SL) interactions, we have devised an integrative, multi-network approach for predicting these interactions that significantly improves upon existing approaches. First, we defined a large number of features for characterizing the relationships between pairs of genes from various data sources. In particular, these features are independent of the known SL interactions, in contrast to some previous approaches. Using these features, we developed a non-parametric multi-classifier system for predicting SL interactions that enabled the simultaneous use of multiple classification procedures. Several comprehensive experiments demonstrated that the SL-independent features in conjunction with the advanced classification scheme led to improved performance compared to the current state-of-the-art method. Using this approach, we derived the first yeast transcription factor genetic interaction network, part of which was well supported by the literature. We also used this approach to predict SL interactions between all non-essential gene pairs in yeast (http://sage.fhcrc.org/downloads/downloads/predicted_yeast_genetic_interactions.zip). This integrative approach is expected to be more effective and robust in uncovering new genetic interactions from the tens of millions of unknown gene pairs in yeast and from the hundreds of millions of gene pairs in higher organisms like mouse and human, in which very few genetic interactions have been identified to date.
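The multi-classifier idea can be sketched as follows: several classifiers are trained on SL-independent gene-pair features and their outputs are combined into a single prediction. The feature matrix, labels, classifier choices, and simple averaging below are synthetic placeholders for the paper's richer data sources and combination scheme.

# A minimal sketch of combining multiple classifiers over gene-pair features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))            # e.g. coexpression, localization, ...
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # placeholder SL / non-SL labels

classifiers = [RandomForestClassifier(n_estimators=50, random_state=0),
               LogisticRegression(max_iter=1000),
               SVC(probability=True, random_state=0)]

probs = []
for clf in classifiers:
    clf.fit(X[:150], y[:150])                     # train each classifier
    probs.append(clf.predict_proba(X[150:])[:, 1])

combined = np.mean(probs, axis=0)        # combine the classifiers' outputs
predicted_sl = (combined > 0.5).astype(int)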
Combined artificial bee colony algorithm and machine learning techniques for prediction of online consumer repurchase intention
Offering services through the web is a novel paradigm in the service sector and a progressive mechanism for rendering offerings across diverse environments. The Internet provides huge opportunities for companies to provide personalized online services to their customers, but the rapid introduction of new web services may adversely affect quality and user satisfaction. Consequently, predicting consumer intention is of great importance when selecting web services for an application. The aim of this study is to predict online consumer repurchase intention; to achieve this objective, a hybrid approach combining machine learning techniques and the Artificial Bee Colony (ABC) algorithm has been used. The study is divided into three phases. First, shopping mall and consumer characteristics relevant to repurchase intention were identified through an extensive literature review. Second, ABC was used to perform feature selection over consumers' characteristics and shopping malls' attributes (with a > 0.1 threshold value) for the prediction model. Finally, K-fold cross-validation was employed to measure the robustness of the best classification model. The classification models, viz. Decision Trees (C5.0), AdaBoost, Random Forest (RF), Support Vector Machine (SVM), and Neural Network (NN), are utilized for prediction of consumer purchase intention. Performance evaluation of the identified models on a training-testing partition (70-30%) of the data set shows that the AdaBoost method outperforms the other classification models, with sensitivity and accuracy of 0.95 and 97.58% respectively on the testing data set. This study is a novel attempt that considers both shopping mall and consumer characteristics in examining consumer purchase intention.
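The part of the pipeline after the ABC search can be sketched as follows: keep the features whose relevance score exceeds the 0.1 threshold, then compare candidate classifiers with k-fold cross-validation. The scores, data, and model subset below are synthetic placeholders, and the ABC optimization itself is omitted.

# A minimal sketch of threshold-based feature selection plus k-fold comparison.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                      # consumer / mall attributes
y = (X[:, 0] - X[:, 3] > 0).astype(int)            # repurchase / no repurchase

abc_scores = np.array([0.42, 0.05, 0.33, 0.27, 0.08, 0.19, 0.02, 0.11])
selected = abc_scores > 0.1                        # keep features above threshold
X_sel = X[:, selected]

models = {"AdaBoost": AdaBoostClassifier(random_state=0),
          "RandomForest": RandomForestClassifier(random_state=0)}
for name, model in models.items():
    acc = cross_val_score(model, X_sel, y, cv=5).mean()   # k-fold validation
    print(f"{name}: mean CV accuracy = {acc:.3f}")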